On the inclusion probabilities in some unequal probability sampling plans without replacement
Comparison results are obtained for the inclusion probabilities in some
unequal probability sampling plans without replacement. For either successive
sampling or Hájek's rejective sampling, the larger the sample size, the
more uniform the inclusion probabilities in the sense of majorization. In
particular, the inclusion probabilities are more uniform than the drawing
probabilities. For the same sample size, and given the same set of drawing
probabilities, the inclusion probabilities are more uniform for rejective
sampling than for successive sampling. This last result confirms a conjecture
of Hájek (Sampling from a Finite Population (1981) Dekker). Results are
also presented in terms of the Kullback--Leibler divergence, showing that the
inclusion probabilities for successive sampling are more proportional to the
drawing probabilities.
Comment: Published at http://dx.doi.org/10.3150/10-BEJ337 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
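The majorization claims can be illustrated by a small exact computation. Everything below (a population of five units, hypothetical drawing probabilities, sample size 2) is an illustrative choice, not taken from the paper:

```python
import numpy as np
from itertools import permutations

# Hypothetical drawing probabilities for a population of 5 units.
p = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
n = 2  # sample size

# Successive sampling: draw n distinct units, each draw made with
# probability proportional to p among the units not yet selected.
# For n = 2 the inclusion probabilities can be computed exactly by
# enumerating ordered pairs.
pi = np.zeros(len(p))
for i, j in permutations(range(len(p)), 2):
    prob = p[i] * p[j] / (1 - p[i])  # P(draw unit i first, then unit j)
    pi[i] += prob
    pi[j] += prob

# pi sums to n; rescale to compare with p on the probability simplex.
pi_scaled = np.sort(pi / n)[::-1]
p_sorted = np.sort(p)[::-1]

# Majorization check: every partial sum of the sorted, rescaled
# inclusion probabilities is <= the corresponding partial sum of the
# sorted drawing probabilities, i.e. pi/n is "more uniform" than p.
print(np.cumsum(pi_scaled))
print(np.cumsum(p_sorted))
assert np.all(np.cumsum(pi_scaled) <= np.cumsum(p_sorted) + 1e-12)
```

The final assertion is exactly the statement that the inclusion probabilities are more uniform than the drawing probabilities in the sense of majorization, for this toy population.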
Relative log-concavity and a pair of triangle inequalities
The relative log-concavity ordering between probability mass functions (pmf's) on non-negative integers is studied. Given three pmf's $f_1, f_2, f_3$ that satisfy $f_1 \le_{\mathrm{lc}} f_2 \le_{\mathrm{lc}} f_3$, we present a pair of (reverse) triangle inequalities: if $\sum_i i f_1(i) = \sum_i i f_2(i) < \infty$, then $D(f_1\|f_3) \ge D(f_1\|f_2) + D(f_2\|f_3)$, and if $\sum_i i f_2(i) = \sum_i i f_3(i) < \infty$, then $D(f_3\|f_1) \ge D(f_3\|f_2) + D(f_2\|f_1)$, where $D(\cdot\|\cdot)$ denotes the Kullback--Leibler divergence. These inequalities, interesting in themselves, are also applied to several problems, including maximum entropy characterizations of Poisson and binomial distributions and the best binomial approximation in relative entropy. We also present parallel results for continuous distributions and discuss the behavior of $\le_{\mathrm{lc}}$ under convolution.
Comment: Published at http://dx.doi.org/10.3150/09-BEJ216 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
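The first reverse triangle inequality can be checked numerically. The distributions below are illustrative choices, not from the paper: $f_1$ = Binomial(10, 0.3), $f_2$ = Poisson(3) and $f_3$ = geometric with mean 3, for which $f_1 \le_{\mathrm{lc}} f_2 \le_{\mathrm{lc}} f_3$ holds and $f_1$, $f_2$ share mean 3:

```python
import math
import numpy as np

N = 200  # truncation point; the tails beyond this are negligible here
k = np.arange(N + 1)

# f1 <=lc f2 <=lc f3: the binomial is log-concave relative to the
# Poisson, which is log-concave relative to the geometric; f1 and f2
# both have mean 3, as the first inequality requires.
f1 = np.array([math.comb(10, i) * 0.3**i * 0.7**(10 - i) if i <= 10 else 0.0
               for i in k])                                # Binomial(10, 0.3)
f2 = np.exp(-3.0 + k * np.log(3.0)
            - np.array([math.lgamma(i + 1) for i in k]))   # Poisson(3)
f3 = 0.25 * 0.75**k                                        # geometric, mean 3

def kl(f, g):
    """Kullback--Leibler divergence D(f || g) on the truncated support."""
    mask = f > 0
    return float(np.sum(f[mask] * np.log(f[mask] / g[mask])))

lhs = kl(f1, f3)
rhs = kl(f1, f2) + kl(f2, f3)
print(lhs, rhs)   # reverse triangle inequality: lhs >= rhs
assert lhs >= rhs - 1e-9
```

The same setup also illustrates the maximum entropy connection: the middle distribution sits between the other two in relative entropy rather than violating the usual triangle direction.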
A Bit of Information Theory, and the Data Augmentation Algorithm Converges
The data augmentation (DA) algorithm is a simple and powerful tool in
statistical computing. In this note basic information theory is used to prove a
nontrivial convergence theorem for the DA algorithm.
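As a reminder of what the DA algorithm does, here is a minimal sketch for a toy model; the model (normal mean with right-censored observations, flat prior) and all numbers are illustrative assumptions, not details from the note:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y_i ~ N(mu, 1), flat prior on mu.  Some observations are
# right-censored at c (we only know y_i > c).  DA alternates:
#   I-step: impute each censored y_i from N(mu, 1) truncated to (c, inf)
#   P-step: draw mu from its posterior given the completed data
y_obs = np.array([0.2, 1.1, 0.5, 1.8, 0.9])
c, n_cens = 2.0, 3          # three observations censored at c
n = len(y_obs) + n_cens

def rtruncnorm(mu, c, size, rng):
    """Simple rejection sampler for N(mu, 1) truncated to (c, inf)."""
    out = np.empty(size)
    for i in range(size):
        x = rng.normal(mu)
        while x <= c:
            x = rng.normal(mu)
        out[i] = x
    return out

mu, chain = 0.0, []
for _ in range(2000):
    y_mis = rtruncnorm(mu, c, n_cens, rng)        # I-step
    ybar = (y_obs.sum() + y_mis.sum()) / n
    mu = rng.normal(ybar, 1 / np.sqrt(n))         # P-step: mu | y ~ N(ybar, 1/n)
    chain.append(mu)

print(np.mean(chain[500:]))   # posterior mean of mu after burn-in
```

The convergence theorem in the note concerns exactly this kind of two-step Markov chain: the information-theoretic argument shows its marginal distribution approaches the target posterior.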
On a Multiplicative Algorithm for Computing Bayesian D-optimal Designs
We use the minorization-maximization principle (Lange, Hunter and Yang 2000)
to establish the monotonicity of a multiplicative algorithm for computing
Bayesian D-optimal designs. This proves a conjecture of Dette, Pepelyshev and
Zhigljavsky (2008).
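For orientation, here is a sketch of the classical multiplicative update for ordinary (non-Bayesian) D-optimality, the kind of iteration whose monotonicity such minorization-maximization arguments establish; the regression model, grid, and iteration count are illustrative assumptions, not the paper's Bayesian setting:

```python
import numpy as np

# Multiplicative algorithm for D-optimal design on a finite grid.
# Model: quadratic regression f(x) = (1, x, x^2) on a grid in [-1, 1].
xs = np.linspace(-1, 1, 21)
F = np.column_stack([np.ones_like(xs), xs, xs**2])  # regression vectors
m = F.shape[1]

w = np.full(len(xs), 1 / len(xs))   # uniform starting design
logdets = []
for _ in range(200):
    M = F.T @ (w[:, None] * F)                  # information matrix M(w)
    logdets.append(np.linalg.slogdet(M)[1])
    # variance function d_i(w) = f(x_i)' M(w)^{-1} f(x_i)
    d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)
    w = w * d / m                               # multiplicative update

# log det M(w) is non-decreasing along the iterations, and the mass
# concentrates toward the D-optimal support {-1, 0, 1}.
print(np.round(w[w > 1e-3], 3))
```

The update keeps the weights on the simplex (since the weighted average of $d_i(w)$ equals $m$), and monotonicity of $\log\det M(w)$ is what an MM argument of the kind used in the paper delivers.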
Some stochastic inequalities for weighted sums
We compare weighted sums of i.i.d. positive random variables according to the usual stochastic order. The main inequalities are derived using majorization techniques under certain log-concavity assumptions. Specifically, let $X_1, \ldots, X_n$ be i.i.d. random variables on $(0, \infty)$. Assuming that $\log X_1$ has a log-concave density, we show that $\sum_i a_i X_i$ is stochastically smaller than $\sum_i b_i X_i$, if $(\log a_1, \ldots, \log a_n)$ is majorized by $(\log b_1, \ldots, \log b_n)$. On the other hand, assuming that $X_1^p$ has a log-concave density for some $p > 1$, we show that $\sum_i a_i X_i$ is stochastically larger than $\sum_i b_i X_i$, if $(a_1^q, \ldots, a_n^q)$ is majorized by $(b_1^q, \ldots, b_n^q)$, where $q = p/(p - 1)$. These unify several stochastic ordering results for specific distributions. In particular, a conjecture of Hitczenko [Sankhyā A 60 (1998) 171--175] on Weibull variables is proved. Potential applications in reliability and wireless communications are mentioned.
Comment: Published at http://dx.doi.org/10.3150/10-BEJ302 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
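A Monte Carlo sanity check of the first result, under the illustrative choice of exponential $X_i$ (so $\log X_1$ has the log-concave density $e^{t - e^t}$) and weight vectors with $(\log a_1, \log a_2) = (0, 0)$ majorized by $(\log b_1, \log b_2) = (\log 2, -\log 2)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# log a = (0, 0) is majorized by log b = (log 2, -log 2), so the
# theorem predicts a1*X1 + a2*X2 <=st b1*X1 + b2*X2 for exponential X_i.
a, b = np.array([1.0, 1.0]), np.array([2.0, 0.5])
X = rng.exponential(size=(200_000, 2))

Sa, Sb = X @ a, X @ b
for t in [0.5, 1.0, 2.0, 3.0]:
    # empirical survival functions: P(Sa > t) should be <= P(Sb > t)
    print(t, (Sa > t).mean(), (Sb > t).mean())
```

Here the ordering can also be verified in closed form ($S_a$ is Gamma(2, 1), while $S_b$ is a mixture of exponentials), so the simulation is purely a sanity check of the stated direction of the inequality.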